In numerical analysis, Romberg's method is used to estimate the definite integral
:<math>\int_a^b f(x)\,dx</math>
by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a triangular array. Romberg's method is a Newton–Cotes formula – it evaluates the integrand at equally spaced points. The integrand must have continuous derivatives, though fairly good results may be obtained if only a few derivatives exist. If it is possible to evaluate the integrand at unequally spaced points, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are generally more accurate.

The method is named after Werner Romberg (1909–2003), who published the method in 1955.

== Method ==
Using <math>h_n = \frac{b-a}{2^n}</math>, the method can be inductively defined by
:<math>R(0,0) = h_1 \bigl(f(a) + f(b)\bigr)</math>
:<math>R(n,0) = \tfrac{1}{2} R(n-1,0) + h_n \sum_{k=1}^{2^{n-1}} f\bigl(a + (2k-1)h_n\bigr)</math>
:<math>R(n,m) = R(n,m-1) + \frac{1}{4^m - 1} \bigl(R(n,m-1) - R(n-1,m-1)\bigr)</math>
or
:<math>R(n,m) = \frac{4^m R(n,m-1) - R(n-1,m-1)}{4^m - 1},</math>
where <math>n \ge m</math> and <math>m \ge 1</math>. In big O notation, the error for ''R''(''n'', ''m'') is
:<math>O\bigl(h_n^{2m+2}\bigr).</math>

The zeroth extrapolation, ''R''(''n'', 0), is equivalent to the trapezoidal rule with 2<sup>''n''</sup> + 1 points; the first extrapolation, ''R''(''n'', 1), is equivalent to Simpson's rule with 2<sup>''n''</sup> + 1 points. The second extrapolation, ''R''(''n'', 2), is equivalent to Boole's rule with 2<sup>''n''</sup> + 1 points. Further extrapolations differ from Newton–Cotes formulas: further Romberg extrapolations modify Boole's rule only slightly, changing its weights into similar ratios, whereas higher-order Newton–Cotes methods produce increasingly divergent weights, eventually including large positive and negative ones. This reflects how Newton–Cotes methods based on high-degree interpolating polynomials fail to converge for many integrals, while Romberg integration is more stable.

When function evaluations are expensive, it may be preferable to replace the polynomial interpolation of Richardson with the rational interpolation proposed by Bulirsch and Stoer.
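The following is a minimal Python sketch of the recursion above. The function name <code>romberg</code>, its parameters, and the <code>max_steps</code> cutoff are illustrative choices rather than part of any standard library; the loops follow the formulas for ''R''(''n'', 0) and ''R''(''n'', ''m'') directly.

<syntaxhighlight lang="python">
import math

def romberg(f, a, b, max_steps=5):
    """Build the Romberg triangle R[n][m] for the integral of f over [a, b].

    Only entries with m <= n are meaningful; R[max_steps][max_steps] is the
    most extrapolated (usually most accurate) estimate.
    """
    R = [[0.0] * (max_steps + 1) for _ in range(max_steps + 1)]
    h = b - a                                  # halved each row so that h = h_n in row n

    # R(0, 0): trapezoidal rule using only the two endpoints, with h_1 = (b - a) / 2
    R[0][0] = 0.5 * h * (f(a) + f(b))

    for n in range(1, max_steps + 1):
        h *= 0.5                               # h_n = (b - a) / 2**n
        # R(n, 0): reuse R(n-1, 0) and add the newly introduced midpoints
        new_points = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (n - 1) + 1))
        R[n][0] = 0.5 * R[n - 1][0] + h * new_points
        # R(n, m): Richardson extrapolation along the row
        for m in range(1, n + 1):
            R[n][m] = R[n][m - 1] + (R[n][m - 1] - R[n - 1][m - 1]) / (4 ** m - 1)

    return R

# Example: integrate sin over [0, pi]; the exact value is 2.
triangle = romberg(math.sin, 0.0, math.pi)
print(triangle[-1][-1])                        # close to 2.0
</syntaxhighlight>

Because each row reuses every function value computed in the previous rows, the whole triangle requires only 2<sup>''n''</sup> + 1 evaluations of ''f'', where ''n'' = <code>max_steps</code>.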